
Conversation


@yiwu0b11 yiwu0b11 commented Dec 15, 2025

This patch adds mid-end support for vectorized min/max reduction operations for half floats, along with AArch64 backend support for these operations.
Floating-point min/max reductions do not require strict ordering, because min and max are associative.

The patch generates the NEON fminv/fmaxv reduction instructions when the max vector length is 8B or 16B. On SVE-capable machines with vector lengths > 16B, it generates the SVE fminv/fmaxv instructions.
The patch also adds support for partial min/max reductions on SVE machines using fminv/fmaxv.
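For context, the shape of loop these reductions come from can be sketched as below. This is an illustrative scalar pattern of a min reduction that an auto-vectorizer like C2's can turn into a vector reduction; it uses plain float rather than the Float16 type the benchmarks exercise, and is not the actual benchmark code.

```java
// Illustrative sketch only: a scalar min-reduction loop of the shape that
// C2's auto-vectorizer can turn into a vector reduction (fminv on AArch64).
// The real ReductionMinFP16 benchmark operates on Float16 values; plain
// float is used here for simplicity.
public class MinReductionSketch {
    static float minReduce(float[] a) {
        float min = Float.POSITIVE_INFINITY;
        for (int i = 0; i < a.length; i++) {
            // Math.min is an associative combine, so the loop can be
            // vectorized without preserving strict evaluation order.
            min = Math.min(min, a[i]);
        }
        return min;
    }

    public static void main(String[] args) {
        float[] a = {3.5f, -1.25f, 2.0f, 0.5f};
        System.out.println(minReduce(a)); // prints -1.25
    }
}
```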

A throughput (ops/ms) ratio > 1 indicates that performance with this patch is better than mainline.

Neoverse N1 (UseSVE = 0, max vector length = 16B):

Benchmark         vectorDim  Mode   Cnt     8B    16B
ReductionMaxFP16   256       thrpt 9      3.69   6.44
ReductionMaxFP16   512       thrpt 9      3.71   7.62
ReductionMaxFP16   1024      thrpt 9      4.16   8.64
ReductionMaxFP16   2048      thrpt 9      4.44   9.12
ReductionMinFP16   256       thrpt 9      3.69   6.43
ReductionMinFP16   512       thrpt 9      3.70   7.62
ReductionMinFP16   1024      thrpt 9      4.16   8.64
ReductionMinFP16   2048      thrpt 9      4.44   9.10

Neoverse V1 (UseSVE = 1, max vector length = 32B):

Benchmark         vectorDim  Mode   Cnt     8B    16B    32B
ReductionMaxFP16   256       thrpt 9      3.96   8.62   8.02
ReductionMaxFP16   512       thrpt 9      3.54   9.25  11.71
ReductionMaxFP16   1024      thrpt 9      3.77   8.71  14.07
ReductionMaxFP16   2048      thrpt 9      3.88   8.44  14.69
ReductionMinFP16   256       thrpt 9      3.96   8.61   8.03
ReductionMinFP16   512       thrpt 9      3.54   9.28  11.69
ReductionMinFP16   1024      thrpt 9      3.76   8.70  14.12
ReductionMinFP16   2048      thrpt 9      3.87   8.45  14.70

Neoverse V2 (UseSVE = 2, max vector length = 16B):

Benchmark         vectorDim  Mode   Cnt     8B    16B
ReductionMaxFP16   256       thrpt 9      4.78  10.00
ReductionMaxFP16   512       thrpt 9      3.74  11.33
ReductionMaxFP16   1024      thrpt 9      3.86   9.59
ReductionMaxFP16   2048      thrpt 9      3.94   8.71
ReductionMinFP16   256       thrpt 9      4.78  10.00
ReductionMinFP16   512       thrpt 9      3.74  11.29
ReductionMinFP16   1024      thrpt 9      3.86   9.58
ReductionMinFP16   2048      thrpt 9      3.94   8.71

Testing:
hotspot_all, jdk (tier1-3) and langtools (tier1) all pass on Neoverse N1/V1/V2.


Progress

  • Change must be properly reviewed (1 review required, with at least 1 Reviewer)
  • Change must not contain extraneous whitespace
  • Commit message must refer to an issue

Issue

  • JDK-8373344: Add support for min/max reduction operations for Float16 (Enhancement - P4)

Reviewing

Using git

Checkout this PR locally:
$ git fetch https://git.openjdk.org/jdk.git pull/28828/head:pull/28828
$ git checkout pull/28828

Update a local copy of the PR:
$ git checkout pull/28828
$ git pull https://git.openjdk.org/jdk.git pull/28828/head

Using Skara CLI tools

Checkout this PR locally:
$ git pr checkout 28828

View PR using the GUI difftool:
$ git pr show -t 28828

Using diff file

Download this PR as a diff file:
https://git.openjdk.org/jdk/pull/28828.diff

Using Webrev

Link to Webrev Comment


bridgekeeper bot commented Dec 15, 2025

👋 Welcome back yiwu0b11! A progress list of the required criteria for merging this PR into master will be added to the body of your pull request. There are additional pull request commands available for use with this pull request.


openjdk bot commented Dec 15, 2025

❗ This change is not yet ready to be integrated.
See the Progress checklist in the description for automated requirements.

@openjdk openjdk bot added hotspot-compiler hotspot-compiler-dev@openjdk.org core-libs core-libs-dev@openjdk.org labels Dec 15, 2025

openjdk bot commented Dec 15, 2025

@yiwu0b11 The following labels will be automatically applied to this pull request:

  • core-libs
  • hotspot-compiler

When this pull request is ready to be reviewed, an "RFR" email will be sent to the corresponding mailing lists. If you would like to change these labels, use the /label pull request command.

@openjdk openjdk bot added the rfr Pull request is ready for review label Dec 15, 2025

mlbridge bot commented Dec 15, 2025

Webrevs

@yiwu0b11 yiwu0b11 changed the title 8373344: Add support for FP16 min/max reduction operations 8373344: Add support for min/max reduction operations for Float16 Dec 15, 2025
@galderz galderz (Contributor) left a comment


Thanks @yiwu0b11, some superficial comments

Author

yiwu0b11 commented Jan 5, 2026

Thanks @galderz for the code review. I've updated the code and also replaced assert with verify.

Comment on lines +380 to +381
case Op_MinReductionVHF:
case Op_MaxReductionVHF:


We can use the NEON instructions for the partial cases as well, if the vector size is <= 16B. Did you test the performance with NEON instead of using the predicated SVE instructions?

Author


Do you mean moving it down, like Op_AddReductionVI and Op_AddReductionVL, to use return !VM_Version::use_neon_for_vector(length_in_bytes);?
It doesn't make much of a difference.

Neoverse V1 (UseSVE = 1, max vector length = 32B)
Benchmark           vectorDim  Mode   Cnt   8B(old) 8B(new) new/old   16B(old) 16B(new) new/old   32B(old) 32B(new) new/old
ReductionMaxFP16       256     thrpt    9     3.96     3.96     1.00        8.63     8.62     1.00        8.02     8.02     1.00
ReductionMaxFP16       512     thrpt    9     3.54     3.54     1.00        9.25     9.25     1.00       11.71    11.71     1.00
ReductionMaxFP16      1024     thrpt    9     3.77     3.77     1.00        8.70     8.71     1.00       14.12    14.07     1.00
ReductionMaxFP16      2048     thrpt    9     3.88     3.88     1.00        8.45     8.44     1.00       14.69    14.69     1.00
ReductionMinFP16       256     thrpt    9     3.96     3.96     1.00        8.62     8.61     1.00        8.02     8.03     1.00
ReductionMinFP16       512     thrpt    9     3.55     3.54     1.00        9.26     9.28     1.00       11.72    11.69     1.00
ReductionMinFP16      1024     thrpt    9     3.76     3.76     1.00        8.69     8.70     1.00       14.10    14.12     1.00
ReductionMinFP16      2048     thrpt    9     3.87     3.87     1.00        8.44     8.45     1.00       14.76    14.70     1.00

Neoverse V2 (UseSVE = 2, max vector length = 16B)
Benchmark           vectorDim  Mode   Cnt   8B(old) 8B(new) new/old   16B(old) 16B(new) new/old
ReductionMaxFP16       256     thrpt    9     4.77     4.78     1.00       10.00    10.00     1.00
ReductionMaxFP16       512     thrpt    9     3.75     3.74     1.00       11.32    11.33     1.00
ReductionMaxFP16      1024     thrpt    9     3.87     3.86     1.00        9.59     9.59     1.00
ReductionMaxFP16      2048     thrpt    9     3.94     3.94     1.00        8.72     8.71     1.00
ReductionMinFP16       256     thrpt    9     4.77     4.78     1.00        9.97    10.00     1.00
ReductionMinFP16       512     thrpt    9     3.77     3.74     0.99       11.35    11.29     0.99
ReductionMinFP16      1024     thrpt    9     3.86     3.86     1.00        9.56     9.58     1.00
ReductionMinFP16      2048     thrpt    9     3.94     3.94     1.00        8.71     8.71     1.00
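For clarity, the change being discussed would place the VHF reduction cases next to the existing add-reduction cases in the AArch64 matcher predicate, roughly like the fragment below. This is an illustrative sketch of the suggestion, not the exact patch.

```cpp
// Sketch of the suggested placement (illustrative, not the exact patch):
// fall through to the same policy as the existing add reductions, so NEON
// is preferred whenever the vector fits in 16B and SVE is used only for
// larger vectors.
case Op_AddReductionVI:
case Op_AddReductionVL:
case Op_MinReductionVHF:
case Op_MaxReductionVHF:
  return !VM_Version::use_neon_for_vector(length_in_bytes);
```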


Do you mean moving it down, like Op_AddReductionVI and Op_AddReductionVL, to use return !VM_Version::use_neon_for_vector(length_in_bytes);?

Yes, that was what I meant.

It doesn't make much of a difference.

So what does 8B/16B/32B mean? I guess it means the real vector size of the reduction operation? But how did you test these cases? I noticed the benchmark code does not have any parallelization differences. Is the vectorization factor decided by using different MaxVectorSize VM options? If so, then I think the partial cases are not touched. Could you please check whether the instruction for VectorMaskGenNode appears in the generated code? I assume there should be a difference, because for partial cases (vector_size < MaxVectorSize) it used the SVE predicated instructions before, while it uses NEON instructions after. And the latency/throughput of the SVE reduction instructions are much worse than the NEON ones.
